human brain
We're about to simulate a human brain on a supercomputer
The world's most powerful supercomputers can now run simulations of billions of neurons, and researchers hope such models will offer unprecedented insights into how our brains work.

What would it mean to simulate a human brain? Today's most powerful computing systems contain enough computational firepower to run simulations of billions of neurons, a number comparable to real brains. We increasingly understand how these neurons are wired together, too, leading to brain simulations that researchers hope will reveal secrets of brain function that were previously hidden. Researchers have long tried to isolate specific parts of the brain, modelling smaller regions on a computer to explain particular brain functions. But "we have never been able to bring them all together into one place, into one larger brain model where we can check whether these ideas are at all consistent", says Markus Diesmann at the Jülich Research Centre in Germany.
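The "billions of neurons" scale above can be made concrete with a toy leaky integrate-and-fire simulation. This is a minimal sketch of the kind of update rule large brain simulations iterate at vastly greater scale; all parameters (network size, connectivity, time constants, input statistics) are illustrative, not taken from any published model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 1000          # real models target ~10^9
n_steps = 500             # 500 ms at 1 ms resolution
tau = 20.0                # membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0

# sparse random recurrent wiring (10% connectivity, illustrative)
weights = rng.normal(0.0, 0.05, (n_neurons, n_neurons))
weights *= rng.random((n_neurons, n_neurons)) < 0.1

v = np.zeros(n_neurons)
spikes = np.zeros(n_neurons, dtype=bool)
spike_counts = np.zeros(n_neurons)
for _ in range(n_steps):
    drive = rng.normal(0.05, 0.1, n_neurons)   # noisy external input
    # leak toward rest, plus input, plus recurrent spikes from last step
    v += -v / tau + drive + weights @ spikes
    spikes = v >= v_thresh
    spike_counts += spikes
    v[spikes] = v_reset                        # reset after a spike

mean_rate = spike_counts.mean() / n_steps * 1000  # spikes/s per neuron
```

Scaling this loop to billions of neurons with biologically measured wiring is exactly where supercomputers come in: the recurrent term becomes a huge sparse matrix-vector product distributed across many nodes.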
- Europe > Germany (0.26)
- North America > United States (0.05)
- North America > Greenland (0.05)
- Arctic Ocean (0.05)
Brain Dissection: fMRI-trained Networks Reveal Spatial Selectivity in the Processing of Natural Images
The alignment between deep neural network (DNN) features and cortical responses currently provides the most accurate quantitative explanation for higher visual areas. At the same time, these model features have been critiqued as uninterpretable explanations, trading one black box (the human brain) for another (a neural network). In this paper, we train networks to directly predict, from scratch, brain responses to images from a large-scale dataset of natural scenes (Allen et al.).
Incorporating Context into Language Encoding Models for fMRI
Language encoding models help explain language processing in the human brain by learning functions that predict brain responses from the language stimuli that elicited them. Current word embedding-based approaches treat each stimulus word independently and thus ignore the influence of context on language understanding. In this work we instead build encoding models using rich contextual representations derived from an LSTM language model. Our models show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area. By varying the amount of context used in the models and providing the models with distorted context, we show that this improvement is due to a combination of better word embeddings learned by the LSTM language model and contextual information. We are also able to use our models to map context sensitivity across the cortex. These results suggest that LSTM language models learn high-level representations that are related to representations in the human brain.
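The encoding-model recipe described above (learn a function from stimulus features to brain responses, then score the predictions per brain area) can be sketched with ridge regression on synthetic data. The dimensions, noise level, and regularization strength below are illustrative placeholders, not values from the paper, and for brevity the score is computed on the training data rather than cross-validated.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trs, n_feat, n_voxels = 300, 64, 50
X = rng.normal(size=(n_trs, n_feat))            # stimulus features per TR
true_w = rng.normal(size=(n_feat, n_voxels))    # synthetic ground truth
Y = X @ true_w + rng.normal(scale=0.5, size=(n_trs, n_voxels))

# ridge solution: w = (X^T X + alpha I)^-1 X^T Y
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# encoding performance: per-voxel correlation between predicted
# and measured responses
pred = X @ w
corr = np.array([np.corrcoef(pred[:, v], Y[:, v])[0, 1]
                 for v in range(n_voxels)])
mean_corr = corr.mean()
```

Swapping independent word embeddings for LSTM context vectors in `X` is the paper's central move; the fitting and scoring machinery stays the same.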
Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators
Spiking neural networks (SNNs) represent a promising approach to developing artificial neural networks that are both energy-efficient and biologically plausible. However, applying SNNs to sequential tasks, such as text classification and time-series forecasting, has been hindered by the challenge of creating an effective and hardware-friendly spike-form positional encoding (PE) strategy. Drawing inspiration from the central pattern generators (CPGs) in the human brain, which produce rhythmic patterned outputs without requiring rhythmic inputs, we propose a novel PE technique for SNNs, termed CPG-PE. We demonstrate that the commonly used sinusoidal PE is mathematically a specific solution to the membrane potential dynamics of a particular CPG. Moreover, extensive experiments across various domains, including time-series forecasting, natural language processing, and image classification, show that SNNs with CPG-PE outperform their conventional counterparts. Additionally, we perform analysis experiments to elucidate the mechanism through which SNNs encode positional information and to explore the function of CPGs in the human brain. This investigation may offer valuable insights into the fundamental principles of neural computation.
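The claim that sinusoidal PE solves the dynamics of a particular CPG can be illustrated with the simplest two-unit oscillator, dx/dt = -w*y, dy/dt = w*x, whose solution x = cos(wt), y = sin(wt) is exactly the cos/sin pair of standard positional encoding. The numerical check below (semi-implicit Euler, illustrative step size) is a simplified stand-in for the paper's CPG model, not its actual formulation.

```python
import numpy as np

omega = 2.0                    # oscillation frequency (illustrative)
dt = 1e-4
steps = int(1.0 / dt)          # integrate over t in [0, 1]

x, y = 1.0, 0.0                # initial condition: (cos 0, sin 0)
for _ in range(steps):
    # semi-implicit Euler: update x, then use the new x for y;
    # this keeps the oscillation amplitude from drifting
    x = x - omega * y * dt
    y = y + omega * x * dt

# the trajectory should track the analytic sinusoidal solution
err_x = abs(x - np.cos(omega))
err_y = abs(y - np.sin(omega))
```

Sampling such oscillators at a range of frequencies, and thresholding their outputs into spikes, is the intuition behind a spike-form PE driven by rhythmic internal dynamics rather than by explicit position inputs.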
Scaling and context steer LLMs along the same computational path as the human brain
Raugel, Joséphine, d'Ascoli, Stéphane, Rapin, Jérémy, Wyart, Valentin, King, Jean-Rémi
Recent studies suggest that the representations learned by large language models (LLMs) are partially aligned to those of the human brain. However, whether and why this alignment score arises from a similar sequence of computations remains elusive. In this study, we explore this question by examining temporally-resolved brain signals of participants listening to 10 hours of an audiobook. We study these neural dynamics jointly with a benchmark encompassing 22 LLMs varying in size and architecture type. Our analyses confirm that LLMs and the brain generate representations in a similar order: specifically, activations in the initial layers of LLMs tend to best align with early brain responses, while the deeper layers of LLMs tend to best align with later brain responses. This brain-LLM alignment is consistent across transformers and recurrent architectures. However, its emergence depends on both model size and context length. Overall, this study sheds light on the sequential nature of computations and the factors underlying the partial convergence between biological and artificial neural networks.
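The layer-by-layer alignment analysis can be sketched as follows: for each model layer, fit an encoding model to brain responses at several latencies and record the best-aligning latency. The data below are synthetic, constructed so that deeper layers match later responses, mirroring (not reproducing) the reported trend; dimensions and the ridge penalty are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_dims, n_layers, n_lags = 400, 32, 4, 4
layers = [rng.normal(size=(n_samples, n_dims)) for _ in range(n_layers)]
# synthetic "brain responses": the response at lag k is driven by
# layer k's activations, plus noise
brain = [layers[k] @ rng.normal(size=(n_dims, n_dims)) +
         rng.normal(scale=0.5, size=(n_samples, n_dims))
         for k in range(n_lags)]

def alignment(X, Y, alpha=1.0):
    """Mean correlation between ridge predictions of Y from X and Y."""
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    P = X @ w
    return np.mean([np.corrcoef(P[:, j], Y[:, j])[0, 1]
                    for j in range(Y.shape[1])])

# for each layer, find the brain latency it aligns with best
best_lag = [int(np.argmax([alignment(layers[l], brain[k])
                           for k in range(n_lags)]))
            for l in range(n_layers)]
```

By construction `best_lag` increases with layer depth here; the paper's finding is that real LLM layers and real temporally-resolved brain signals show the same monotonic pattern.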
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.67)
- Health & Medicine > Health Care Technology (0.48)
Revealed: The five key stages of the human brain - with the 'adolescent' phase lasting until age 32
There are five key stages of the human brain, a new study has revealed. Researchers from the University of Cambridge compared brain scans of 3,802 people aged between 0 and 90. Their analysis revealed that the average human life is split by four pivotal 'turning points' between five key stages: childhood, adolescence, adulthood, early ageing, and late ageing.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.24)
- North America > Canada > Alberta (0.14)
- North America > United States > Nevada > Clark County > Las Vegas (0.05)
- (18 more...)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
Lab-grown models of human brains are advancing rapidly. Can ethics keep pace?
Pacific Grove, California--Pop a few human stem cells into culture, provide the right molecular signals, and before long a mock cerebral cortex or a cerebellum knockoff could be floating in the medium. These neural, or brain, organoids, typically just a few millimeters across, are not "brains in a dish," as some journalists have described them. But they are becoming ever more sophisticated and true to life, capturing more of the brain's cellular and structural intricacy. "It's surprising how far this [area] has advanced in the last year," says John Evans, a sociologist at the University of California San Diego who follows the research and public opinions on it. That progress has allowed researchers to delve deeper into how the human brain develops, functions, and goes awry in diseases, but it has also sharpened ethical questions.
- North America > United States > California > San Diego County > San Diego (0.25)
- North America > United States > California > Monterey County > Pacific Grove (0.25)
- South America (0.05)
- (5 more...)
On the Analogy between Human Brain and LLMs: Spotting Key Neurons in Grammar Perception
Norouzi, Sanaz Saki, Masjedi, Mohammad, Hitzler, Pascal
Artificial Neural Networks, the building blocks of AI, were inspired by the human brain's network of neurons. Over the years, these networks have evolved to replicate the complex capabilities of the brain, allowing them to handle tasks such as image and language processing. In the realm of Large Language Models, there has been a keen interest in making the language learning process more akin to that of humans. While neuroscientific research has shown that different grammatical categories are processed by different neurons in the brain, we show that LLMs operate in a similar way. Utilizing Llama 3, we identify the most important neurons associated with the prediction of words belonging to different part-of-speech tags. Using this knowledge, we train a classifier and show that the activation patterns of these key neurons can reliably predict part-of-speech tags on fresh data. The results suggest the presence of a subspace in LLMs focused on capturing part-of-speech tag concepts, resembling patterns observed in lesion studies of the brain in neuroscience.
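A minimal sketch of this key-neuron analysis, on synthetic activations rather than Llama 3 internals: rank neurons by how strongly their mean activation varies across part-of-speech tags, keep the top few, and test a simple nearest-centroid classifier on held-out tokens. All sizes, the selectivity strength, and the classifier choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tokens, n_neurons, n_tags, n_key = 600, 100, 3, 10
tags = rng.integers(0, n_tags, n_tokens)       # POS tag per token
acts = rng.normal(size=(n_tokens, n_neurons))  # hidden activations
# make the first n_key neurons genuinely tag-selective (synthetic)
acts[:, :n_key] += tags[:, None] * 1.5

# rank neurons by the variance of their per-tag mean activation
tag_means = np.stack([acts[tags == t].mean(axis=0) for t in range(n_tags)])
key = np.argsort(tag_means.var(axis=0))[::-1][:n_key]

# nearest-centroid classifier on the key neurons, train/test split
train, test = slice(0, 400), slice(400, None)
centroids = np.stack([acts[train][tags[train] == t][:, key].mean(axis=0)
                      for t in range(n_tags)])
d = ((acts[test][:, key][:, None, :] - centroids[None]) ** 2).sum(-1)
pred = d.argmin(axis=1)
accuracy = (pred == tags[test]).mean()
```

The selection step recovers the planted selective neurons, and the classifier predicts tags on the held-out split well above chance, which is the shape of the evidence the paper reports for its Llama 3 key neurons.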
- North America > United States > Kansas (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)